GNN's Uncertainty Quantification using Self-Distillation

Graph Neural Networks (GNNs) have shown remarkable performance in the healthcare domain. However, quantifying the predictive uncertainty of GNNs, an important aspect of trustworthiness in clinical settings, remains challenging. While Bayesian and ensemble methods can quantify uncertainty, they are computationally expensive. Additionally, the disagreement metric used by ensemble methods to compute uncertainty cannot capture the diversity of the models in an ensemble. In this paper, we propose a novel method, based on knowledge distillation, to quantify GNN uncertainty more efficiently and with higher precision. We apply self-distillation, where the same network serves as both the teacher and the student model, thereby avoiding the need to train several networks independently. To ensure the impact of self-distillation, we develop an uncertainty metric that captures the diverse nature of the network by assigning different weights to each GNN classifier. We experimentally evaluate the precision, performance, and ability of our approach to distinguish out-of-distribution data on two graph datasets: MIMIC-IV and Enzymes. The evaluation results demonstrate that the proposed method can effectively capture the predictive uncertainty of the model while achieving performance similar to that of MC Dropout and ensemble methods.

Project: TRS1 1.10
Toronto Metropolitan University, Vector Institute, McMaster University
Publication, 2025-06-18

61. Scoping Review of Deep Learning Applications in Child and Youth Mental Health Research

Toronto Metropolitan University, Vector Institute, McMaster University, University of British Columbia
Publication, 2025-04-09
Authors: Manasvi Vanama, Simran Saggu, Krysten DeSouza, Daneshvar, H., Cassandra Czobit, Ahmad Mauluddin, Judy Zhao, Samavi, R., Thomas E Doyle, Paulo Pires, Sassi, R., Laura Duncan
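The weighted-disagreement idea from the self-distillation abstract above ("assigning different weights to each GNN classifier") can be sketched as follows. The abstract does not specify the exact metric, so this is a minimal illustrative sketch: uncertainty is taken as the weighted mean KL divergence of each classifier head's prediction from the weighted consensus prediction. The function names and the KL-based formulation are assumptions, not the paper's definition.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def weighted_disagreement_uncertainty(head_logits, head_weights):
    """Illustrative uncertainty metric: weighted disagreement of classifier heads.

    head_logits: (n_heads, n_classes) per-head logits for one input, e.g.
      intermediate GNN classifiers trained via self-distillation.
    head_weights: (n_heads,) per-head weights (hypothetical, e.g. based on
      depth or validation accuracy); normalized to sum to 1 here.
    """
    w = np.asarray(head_weights, dtype=float)
    w = w / w.sum()
    probs = softmax(np.asarray(head_logits, dtype=float))   # (n_heads, n_classes)
    consensus = (w[:, None] * probs).sum(axis=0)            # weighted mean prediction
    eps = 1e-12
    # KL divergence of each head from the consensus, then weighted average:
    kl = (probs * (np.log(probs + eps) - np.log(consensus + eps))).sum(axis=1)
    return float((w * kl).sum())
```

When all heads agree, the metric is zero; the more the heads' predictions diverge, the larger it grows, and the weights let more trusted heads dominate the consensus.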
157. Predicting Child and Youth ED Visits With Large Language Models

Vector Institute, McMaster University, Toronto Metropolitan University, University of British Columbia
Publication, 2025-04-09
Authors: Cassandra Czobit, Samavi, R., Daneshvar, H., Sassi, R., Laura Duncan, Ahmad Mauluddin, Judy Zhao, Paulo Pires, Thomas E Doyle
Efficient Subsampling for GNN Downstream Tasks

While Graph Neural Networks (GNNs) have shown significant promise for data integration using graph structures, methods to support subsampling graph data are lagging. To address this gap, we propose a novel importance-based data subsampling framework. The framework strategically identifies inputs from a primary graph dataset based on their impact on the model's learning of downstream tasks, such as graph or node classification. Our measure of impact is the predictive uncertainty of each data point. To ensure the subsample is representative of the original sample, we cluster the data points based on their learned graph representations and then subsample from the identified clusters. The process favours selecting data points with greater predictive uncertainty while preserving the diversity of the overall sample. We evaluate our approach on a multi-source, real-world dataset on child and youth mental health, comprising emergency department (ED) admissions and mental health questionnaire data. Our experimental results demonstrate that training a GNN on samples identified by the proposed framework yields a statistically significant improvement in predictive performance (on average, 10.13% across metrics) compared to training on a randomly selected subset of patients. The code is available at https://github.com/tailabTMU/GSS.

Project: TRS1 1.10
Toronto Metropolitan University, McMaster University
Publication, 2025-11-24
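The cluster-then-subsample step described in the abstract above (favour high-uncertainty points while preserving diversity across clusters of learned representations) can be sketched as below. This is not the released implementation (see the linked GSS repository for that); it is a minimal sketch assuming cluster labels have already been obtained, e.g. by running k-means on GNN embeddings, and it allocates the selection budget across clusters in proportion to cluster size, an assumption on our part.

```python
import numpy as np

def uncertainty_cluster_subsample(cluster_labels, uncertainty, budget):
    """Illustrative subsampling: split the budget across clusters in proportion
    to cluster size (preserving diversity), then keep the highest-uncertainty
    points within each cluster (favouring informative points).

    cluster_labels: (n,) cluster assignment per data point, assumed to come
      from clustering the learned graph representations.
    uncertainty: (n,) predictive uncertainty per data point.
    budget: total number of points to select.
    Returns sorted indices of the selected points.
    """
    labels = np.asarray(cluster_labels)
    unc = np.asarray(uncertainty, dtype=float)
    n = len(labels)
    selected = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # Proportional allocation, at least one point per non-empty cluster:
        quota = min(len(idx), max(1, int(round(budget * len(idx) / n))))
        # Highest-uncertainty points first within the cluster:
        top = idx[np.argsort(-unc[idx])[:quota]]
        selected.extend(top.tolist())
    return sorted(selected)[:budget]  # trim if rounding overshoots the budget
```

The proportional quota is one simple way to balance diversity against informativeness; the paper's actual selection rule may weight clusters or uncertainties differently.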